# Text Retrieval
## Qwen3 Embedding 0.6B W4A16 G128
Author: boboliu | License: Apache-2.0 | Tags: Text Embedding
GPTQ-quantized version of Qwen3-Embedding-0.6B (4-bit weights, 16-bit activations, group size 128), reducing GPU memory usage with minimal loss in retrieval quality.
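A minimal sketch of computing embeddings with `transformers`, assuming the quantized checkpoint loads the same way as the base Qwen3-Embedding-0.6B (a GPTQ backend must be installed for the 4-bit weights). The repository id and the left-padded last-token pooling shown here are assumptions, so defer to the model card for exact usage.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder id: substitute the W4A16-G128 GPTQ repository for the base model.
model_id = "Qwen/Qwen3-Embedding-0.6B"

tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
model = AutoModel.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

texts = [
    "What is text retrieval?",
    "Text retrieval finds the documents most relevant to a query.",
]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(model.device)

with torch.no_grad():
    hidden = model(**batch).last_hidden_state   # [batch, seq_len, dim]

# Last-token pooling (valid because padding is on the left), then L2-normalize.
embeddings = torch.nn.functional.normalize(hidden[:, -1], dim=-1)
print((embeddings[0] @ embeddings[1]).item())   # cosine similarity
```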
## PEG
Author: TownsWu | Tags: Text Embedding, Transformers, Chinese
PEG achieves robust text retrieval through progressive learning, adjusting loss weights according to the difficulty of the negative samples.
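The description suggests a contrastive objective whose negative terms are reweighted by how hard each negative is. The sketch below only illustrates that general idea with an invented softmax weighting and toy hyperparameters; it is not PEG's actual training loss.

```python
import torch

def difficulty_weighted_info_nce(query, positive, negatives, temperature=0.05, alpha=1.0):
    """Contrastive loss where harder negatives (higher similarity to the query)
    receive larger weights. Inputs are L2-normalized embeddings:
    query [d], positive [d], negatives [n, d]."""
    pos_sim = query @ positive / temperature     # scalar
    neg_sim = negatives @ query / temperature    # [n]
    # Illustrative difficulty weights: softmax over negative similarities,
    # rescaled so the weights average to 1.
    weights = torch.softmax(alpha * neg_sim.detach(), dim=0) * negatives.size(0)
    denom = pos_sim.exp() + (weights * neg_sim.exp()).sum()
    return -(pos_sim - denom.log())

# Toy usage with random unit vectors.
d, n = 8, 4
q = torch.nn.functional.normalize(torch.randn(d), dim=0)
p = torch.nn.functional.normalize(torch.randn(d), dim=0)
negs = torch.nn.functional.normalize(torch.randn(n, d), dim=1)
print(difficulty_weighted_info_nce(q, p, negs))
```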
## BGE M3 Mindspore GGUF
Author: mradermacher | Tags: Large Language Model, English
GGUF-quantized version of BGE_M3_Mindspore, offering multiple quantization levels to suit different memory and quality trade-offs.
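A minimal sketch of computing embeddings from a GGUF file with llama-cpp-python; the local filename and quantization level are placeholders, and the example assumes the GGUF conversion includes pooling so that one vector per input is returned.

```python
from llama_cpp import Llama

# Placeholder path: point this at whichever quantization you downloaded
# (e.g. a Q4_K_M or Q8_0 file).
model = Llama(model_path="./bge-m3-mindspore.Q4_K_M.gguf", embedding=True, verbose=False)

def embed(text: str) -> list[float]:
    return model.create_embedding(text)["data"][0]["embedding"]

query_vec = embed("What is dense retrieval?")
doc_vec = embed("Dense retrieval encodes queries and documents as vectors and ranks by similarity.")

# Cosine similarity without extra dependencies.
dot = sum(a * b for a, b in zip(query_vec, doc_vec))
norm = (sum(a * a for a in query_vec) ** 0.5) * (sum(b * b for b in doc_vec) ** 0.5)
print(dot / norm)
```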